Introduction: Artificial intelligence (AI) has the potential to facilitate the automation of CMR analysis for biomarker extraction. However, most AI algorithms are trained on a specific input domain (such as a single scanner vendor or hospital-tailored imaging protocol) and lack the robustness to maintain optimal performance when applied to CMR data from other input domains. Methods: Our proposed framework consists of an AI-based algorithm for biventricular segmentation of short-axis images, followed by post-analysis quality control to detect erroneous results. The segmentation algorithm was trained on a large clinical dataset of CMR scans from two NHS hospitals (n = 2793) and validated on this dataset (n = 441) and on five external datasets (n = 6808). The validation data included CMR scans of patients with a range of diseases, acquired at 12 different centres using CMR scanners from all major vendors. Results: Our method yields median Dice scores above 87%, translating into median absolute errors in cardiac biomarkers within the range of inter-observer variability: < 8.4 ml (left ventricle), < 9.2 ml (right ventricle), < 13.3 g (left ventricular mass) and < 5.9% (ejection fraction) across all datasets. Stratification of cases by cardiac disease phenotype and by scanner vendor showed good agreement. Conclusions: We show that our proposed tool, which combines a state-of-the-art AI algorithm trained on a large-scale multi-domain CMR dataset with post-analysis quality control, allows us to robustly analyse CMR scans from multiple centres, vendors and cardiac diseases. This is a fundamental step for the clinical translation of AI algorithms. Moreover, at no additional computational cost, our method yields a range of additional biomarkers of cardiac function (filling and ejection rates, regional wall motion and strain).
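The segmentation accuracy above is reported as a median Dice score. As a point of reference only, here is a minimal sketch of how the Dice overlap between a predicted and a reference binary mask is typically computed; the function and the random masks are illustrative and not part of the paper's pipeline.

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two binary masks (1 = structure, 0 = background)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Illustrative use: compare an AI segmentation against a manual contour
# (random masks stand in for real short-axis segmentations here).
rng = np.random.default_rng(0)
pred = rng.random((192, 192)) > 0.5
ref = rng.random((192, 192)) > 0.5
print(f"Dice: {dice_score(pred, ref):.3f}")
```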
Transformer-based language models have been shown to be highly effective for several NLP tasks. In this paper, we consider three transformer models, BERT, RoBERTa, and XLNet, in both small and large versions, and investigate how faithful their representations are with respect to the semantic content of texts. We formalize a notion of semantic faithfulness, in which the semantic content of a text should causally figure in a model's inferences in question answering. We then test this notion by observing a model's behavior when answering questions about a story after performing two novel semantic interventions: deletion intervention and negation intervention. While transformer models achieve high performance on standard question answering tasks, we show that they fail to be semantically faithful once we perform these interventions, for a significant number of cases (~50% for deletion intervention, and a ~20% drop in accuracy for negation intervention). We then propose an intervention-based training regime that can mitigate the undesirable effects of deletion intervention by a significant margin (from ~50% to ~6%). We analyze the inner workings of the models to better understand the effectiveness of intervention-based training for deletion intervention. But we show that this training does not attenuate other aspects of semantic unfaithfulness, such as the models' inability to deal with negation intervention or to capture the predicate-argument structure of texts. We also test InstructGPT, via prompting, for its ability to handle the two interventions and to capture predicate-argument structure. While InstructGPT models do achieve very high performance on the predicate-argument structure task, they fail to respond adequately to our deletion and negation interventions.
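To make the deletion-intervention idea concrete, the following is a minimal sketch using an off-the-shelf extractive QA pipeline from Hugging Face Transformers as a stand-in for the BERT/RoBERTa/XLNet models studied in the paper; the story, question, and model name are illustrative assumptions, not the paper's actual setup.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

story = (
    "Anna left her keys on the kitchen table before going to work. "
    "Later, her brother moved the keys into the drawer."
)
question = "Where are the keys?"

# Original context.
original = qa(question=question, context=story)

# Deletion intervention: remove the sentence that supports the answer and
# check whether the model still produces it. A semantically faithful model
# should change its answer (or at least its confidence) accordingly.
intervened_story = "Anna left her keys on the kitchen table before going to work."
intervened = qa(question=question, context=intervened_story)

print("original:  ", original["answer"], original["score"])
print("intervened:", intervened["answer"], intervened["score"])
```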
Only limited studies and superficial evaluations are available on agents' behaviors and roles within a Multi-Agent System (MAS). We simulate a MAS using Reinforcement Learning (RL) in a pursuit-evasion (a.k.a. predator-prey pursuit) game, which shares task goals with target acquisition, and we create different adversarial scenarios by replacing RL-trained pursuers' policies with two distinct (non-RL) analytical strategies. Using heatmaps of agents' positions (state-space variable) over time, we are able to categorize an RL-trained evader's behaviors. The novelty of our approach entails the creation of an influential feature set that reveals underlying data regularities, which allow us to classify an agent's behavior. This classification may aid in catching the (enemy) targets by enabling us to identify and predict their behaviors, and when extended to pursuers, this approach towards identifying teammates' behavior may allow agents to coordinate more effectively.
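A minimal sketch of the kind of position heatmap used to characterize an agent's behavior: accumulate visits to discretized grid cells of the state space over an episode. The trajectory below is synthetic; in the paper the positions come from the RL-trained evader.

```python
import numpy as np

def position_heatmap(xy: np.ndarray, arena_size: float = 1.0, bins: int = 20) -> np.ndarray:
    """2D visitation histogram of an agent's (x, y) positions over an episode."""
    hist, _, _ = np.histogram2d(
        xy[:, 0], xy[:, 1],
        bins=bins, range=[[0, arena_size], [0, arena_size]],
    )
    return hist / hist.sum()  # normalize so the heatmap is a distribution

# Synthetic random-walk trajectory standing in for an evader's logged positions.
rng = np.random.default_rng(1)
trajectory = np.cumsum(rng.normal(0, 0.01, size=(500, 2)), axis=0) % 1.0
heatmap = position_heatmap(trajectory)
print(heatmap.shape, round(heatmap.sum(), 3))  # (20, 20) 1.0
```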
We discuss a platform that has both software and hardware components, and whose purpose is to support research into characterizing and mitigating the sim-to-real gap in robotics and vehicle autonomy engineering. The software is operating-system independent and has three main components: a simulation engine called Chrono, which supports high-fidelity vehicle and sensor simulation; an autonomy stack for algorithm design and testing; and a development environment that supports visualization and hardware-in-the-loop experimentation. The accompanying hardware platform is a 1/6th-scale vehicle augmented with reconfigurable mountings for computing, sensing, and tracking. Since this vehicle platform has a digital twin within the simulation environment, one can test the same perception, state estimation, or control algorithms, as well as the processors they run on, in both simulation and reality. A demonstration is provided to show the utilization of this platform for autonomy research. Future work will concentrate on augmenting ART/ATK with support for a full-sized Chevy Bolt EUV, which will be made available to this group in the immediate future.
Missing data is a common concern in health datasets, and its impact on good decision-making processes is well documented. Our study's contribution is a methodology for tackling missing data problems using a combination of synthetic dataset generation, missing data imputation and deep learning methods. Specifically, we conducted a series of experiments with the following objectives: $a)$ generating a realistic synthetic dataset, $b)$ simulating data missingness, $c)$ recovering the missing data, and $d)$ analyzing imputation performance. Our methodology used a Gaussian mixture model, whose parameters were learned from a cleaned subset of a real demographic and health dataset, to generate the synthetic data. We simulated missingness degrees of $10\%$, $20\%$, $30\%$, and $40\%$ under the missing completely at random (MCAR) scheme. We used an integrated performance analysis framework involving clustering, classification and direct imputation analysis. Our results show that models trained on synthetic and imputed datasets could make predictions with an accuracy of $83\%$ and $80\%$ on $a)$ an unseen real dataset and $b)$ an unseen reserved synthetic test dataset, respectively. Moreover, the models that used the DAE method for imputation yielded the lowest log loss, an indication of good performance, even though the accuracy measures were slightly lower. In conclusion, our work demonstrates that, using our methodology, one can reverse engineer a solution to resolve missingness on an unseen dataset with missing values. Moreover, though we used a health dataset, our methodology can be utilized in other contexts.
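A minimal sketch of two of the steps described above, generating synthetic records from a Gaussian mixture model and simulating MCAR missingness at a chosen rate. The feature dimensionality, component count, and placeholder data are illustrative assumptions, not the demographic and health dataset or parameters used in the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Stand-in for the cleaned real subset: 1000 records, 5 correlated numeric features.
real_clean = rng.normal(size=(1000, 5)) @ rng.normal(size=(5, 5))

# (a) Fit a Gaussian mixture model and sample a synthetic dataset from it.
gmm = GaussianMixture(n_components=3, random_state=0).fit(real_clean)
synthetic, _ = gmm.sample(n_samples=2000)

# (b) Simulate MCAR missingness: each cell is dropped independently with the
#     same probability, regardless of its (or any other) value.
def simulate_mcar(data: np.ndarray, missing_rate: float) -> np.ndarray:
    mask = rng.random(data.shape) < missing_rate
    corrupted = data.copy()
    corrupted[mask] = np.nan
    return corrupted

synthetic_30 = simulate_mcar(synthetic, missing_rate=0.30)
print(f"Fraction missing: {np.isnan(synthetic_30).mean():.2f}")  # ~0.30
```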
Given the versatility of generative adversarial networks (GANs), we seek to understand the benefits of using an existing GAN to augment simulated images and thereby reduce the sim-to-real gap. We carry out the analysis in the context of simulated robot performance and image-based perception. Specifically, we quantify a GAN's ability to reduce the sim-to-real difference in image perception for robotics. Using semantic segmentation, we analyse the sim-to-real difference in training and testing with nominal and GAN-augmented simulations of an urban environment. As a secondary application, we consider the use of a GAN to augment an indoor environment; for this application, object detection is used to analyse the augmentation for training and testing. The results presented quantify the reduction of the sim-to-real gap when a GAN is used and illustrate the benefits of its use.
This contribution focuses on camera simulation as it comes into play in virtual prototyping. We propose a camera-model validation methodology based on the performance of a perception algorithm and the context in which that performance is measured. This approach differs from traditional validation of synthetic images, which is typically carried out at the pixel or feature level and tends to require matched pairs of synthetic and real images. Since acquiring paired images is costly and restrictive, the proposed methodology is built on datasets that need not be paired. In a real and a simulated dataset, A and B respectively, statistically similar subsets Ac and Bc are identified, and the responses of the perception algorithm to these similar subsets are compared statistically. This validation approach yields a statistical measure of performance similarity, as well as a measure of similarity between the content of A and B. The methodology is demonstrated using images generated with Chrono::Sensor and a scaled autonomous vehicle, with an object detector serving as the perception algorithm. The results demonstrate the ability to quantify (i) the differences between simulated and real data; (ii) the propensity of training methods to mitigate the sim-to-real gap; and (iii) the contextual overlap between the two datasets.
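A minimal sketch of the statistical comparison behind this kind of methodology, under stated assumptions: given per-image content embeddings and per-image detector scores for a real dataset A and a simulated dataset B, select content-matched subsets and compare the detector's responses on them. All arrays below are synthetic placeholders, and the matching and test choices are illustrative, not the paper's actual procedure with Chrono::Sensor images.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

# Placeholder content embeddings (e.g. from a pretrained backbone) and
# per-image detector scores (e.g. mean detection confidence) for A and B.
emb_A, emb_B = rng.normal(size=(300, 32)), rng.normal(size=(200, 32))
score_A, score_B = rng.uniform(0.5, 1.0, 300), rng.uniform(0.4, 1.0, 200)

# Content-match: keep each simulated image only if its nearest real embedding
# is close enough; the matched real images form the similar subset A_c.
dists = np.linalg.norm(emb_B[:, None, :] - emb_A[None, :, :], axis=-1)
nearest = dists.min(axis=1)
keep_B = nearest < np.quantile(nearest, 0.5)       # B_c: best-matched half of B
keep_A = np.unique(dists[keep_B].argmin(axis=1))   # A_c: their nearest neighbours in A

# Compare the detector's response distributions on the matched subsets.
stat, p_value = ks_2samp(score_A[keep_A], score_B[keep_B])
print(f"KS statistic {stat:.3f}, p-value {p_value:.3f}")
```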
We describe a software framework and a hardware platform used in tandem for the design and analysis of robot autonomy algorithms in simulation and in reality. The software is open source, containerized, and operating-system (OS) independent, and has three main components: a ROS 2 interface to the C++ vehicle simulation framework Chrono, which provides high-fidelity wheeled/tracked vehicle and sensor simulation; a basic ROS 2-based autonomy stack for algorithm design and testing; and a development ecosystem that enables visualization and hardware experiments in perception, state estimation, path planning and control. The accompanying hardware platform is a 1/6th-scale vehicle augmented with reconfigurable mountings for computing, sensing and tracking. Its purpose is to allow algorithms and sensor configurations to be physically tested and improved. Since this vehicle platform has a digital twin within the simulation environment, the same algorithms and autonomy stack can be tested and compared in simulation and in reality. The platform was built to characterize and manage the simulation-to-reality gap. Herein, we describe how it is set up, deployed, and used to improve autonomy for mobile applications.
Language models demonstrate both quantitative improvements and new qualitative capabilities as their scale increases. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors from 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, mathematics, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably typically involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved through prompting.
Google's operational flood forecasting system was developed to provide accurate real-time flood warnings to agencies and the public, with a focus on riverine floods in large, gauged rivers. It became operational in 2018 and has since expanded geographically. The forecasting system consists of four subsystems: data validation, stage forecasting, inundation modeling, and alert distribution. Machine learning is used in two of the subsystems. Stage forecasting is modeled with long short-term memory (LSTM) networks and linear models. Flood inundation is computed with threshold and manifold models, where the former computes the inundation extent and the latter computes both inundation extent and depth. The manifold model, presented here for the first time, provides a machine-learning alternative to hydraulic modeling of flood inundation. When evaluated on historical data, all models achieve performance metrics high enough for operational use. The LSTM shows higher skill than the linear model, while the threshold and manifold models achieve similar performance metrics for modeling inundation extent. During the 2021 monsoon season, the flood warning system was operational in India and Bangladesh, covering flood-prone regions around rivers with a total area of 287,000 square kilometres and home to more than 350 million people. More than 100 million flood alerts were sent to affected populations, relevant authorities, and emergency organizations. Current and future work on the system includes extending coverage to additional flood-prone locations and improving modeling capabilities and accuracy.
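A minimal sketch of the threshold-style inundation idea described above: each pixel of the floodplain is assigned a stage threshold, and the forecast inundation extent is the set of pixels whose threshold is exceeded by the forecast gauge stage. The grid, thresholds, and forecast value below are synthetic illustrations, not the operational system's data or its actual model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-pixel stage thresholds (in metres) over a small floodplain grid:
# a pixel is considered inundated once the river stage exceeds its threshold.
thresholds = rng.uniform(2.0, 8.0, size=(100, 100))

def inundation_extent(forecast_stage: float, thresholds: np.ndarray) -> np.ndarray:
    """Binary inundation map for a threshold-style model."""
    return forecast_stage >= thresholds

# Forecast gauge stage, e.g. produced upstream by a stage-forecasting model
# such as an LSTM or a linear model.
forecast_stage = 5.4
extent = inundation_extent(forecast_stage, thresholds)
print(f"Inundated fraction: {extent.mean():.2%}")
```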